"CB" (jrcb)
02/24/2018 at 08:51 • Filed to: xkcd | 2 | 13 |
Furthermore, I don’t know why we expect perfection from robots when people are such garbage at driving themselves. Maybe people just prefer a feeling of control if things are going to go wrong, rather than putting all their faith in something they know will make mistakes.
Ash78, voting early and often
> CB
02/24/2018 at 08:59 | 1 |
It’s simple: with humans, there’s an Accuser and an Accused, and they can hash out the details in court about distractions, DUI, negligence, etc., all based on legal precedent and relatively objective laws.
I’m anxious to see how many coders, logicians, or CEOs have to take the stand to try to avoid blame the first few times an autonomous car has to make a purely logical decision that involves loss of life.
Nibby
> CB
02/24/2018 at 09:02 | 0 |
“people are such garbage”
Mercedes Streeter
> CB
02/24/2018 at 09:18 | 0 |
Shoot, this argument works with any idea that people try to fearmonger with.
If it’s not happening now, why should we expect anything different?
RamblinRover Luxury-Yacht
> CB
02/24/2018 at 09:26 | 9 |
To put it in blunt, simple terms, it’s not the process efficiency or the correctness of results, it’s the error handling. On two levels: the first level is “rolling with” the unexpected, and the second is blame. It is comparatively easy to develop a car that drives itself correctly when all the roads are correct, other drivers drive perfectly, and no complicating factors arise - but since actual adaptive systems are so bloody hard to build, when something odd happens the computer has to rely on a reference for a response, or ignore it. That raft of automated cars crashing a while back, nearly all of which were “other driver at fault”... yeah, no shit, Sherlock. Somebody did something stupid, but rather than responding to casual dumbfuckery in a casual and fluid way, the cars failed to give an appropriate response.
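To make that “reference for response” problem concrete - and this is a totally made-up sketch, not code from any real self-driving stack, with invented event names and maneuvers - the lookup-table-plus-fallback shape is roughly this:

```python
# Invented illustration of "rely on a reference for response or ignore".
# Event names and maneuvers are made up for the sake of argument.
KNOWN_RESPONSES = {
    "pedestrian_in_crosswalk": "brake_to_stop",
    "lead_car_braking_hard": "increase_following_gap",
    "small_soft_debris": "maintain_course",
}

def respond(event: str) -> str:
    """Planned maneuver for a recognized event; generic fallback otherwise.

    Everything not in the reference table falls through to one catch-all
    response - which is exactly the casual, fluid judgment call a human
    makes without needing a lookup table at all.
    """
    return KNOWN_RESPONSES.get(event, "slow_down_and_pull_over")

print(respond("lead_car_braking_hard"))      # increase_following_gap
print(respond("mattress_flying_off_truck"))  # slow_down_and_pull_over
```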
I mean, nearly every day on the road, somebody in your active sphere will do something very badly wrong. It’s just how it is. Most people either don’t respond or respond preventively as needed, according to learned instinct.
Which brings us to the mistakes. While people are often good at the “ain’t care” sort of rolling with the punches, and AI (when monstrously complex) can approach that, there are still fuckups. At which time, blame-finding must take place, and that blame-finding will highlight a specific inadequacy in AI error-handling without providing a discrete entity to blame. Sure, this means a hotfix can get published so it won’t happen next time (probably), but there’s no single, personal point of failure - and because any such error imputes to the whole company, it invites a perverse incentive to reply “NO, THE MACHINE IS PERFECT, AI CAR IS A BLAMELESS AND HOLY CREATURE”. With the whole financial and influence base of the self-driving car company on one side and the horror of personal-injury tort on the other, it’s a recipe for courtroom cancer and nothing getting fixed.
The most underappreciated part of perfection is the ability to handle imperfection, and sometimes complementary imperfection is hard to beat; which sets a really difficult standard for perfection.
If only EssExTee could be so grossly incandescent
> RamblinRover Luxury-Yacht
02/24/2018 at 10:01 | 0 |
I think the two big things we need to work on are object recognition and trend prediction.
Current object recognition seems to rely on matching photos to databases and is both a lot slower and less accurate than our own ability to process input. We can look at something and know “that’s a tree” almost before our eyes have settled on it. Can AI do the same thing yet? What happens if a newspaper blows in front of the car on the highway? It needs to be able to tell that it’s safe to hit and that the car doesn’t need to slam on the brakes in fast-moving traffic.
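Purely as a toy example (the object classes, the confidence number, and the “harmless to hit” list are all invented here, not how any production perception stack works), the newspaper problem boils down to something like:

```python
# Toy version of the "newspaper blowing across the highway" decision.
# Classes, confidences, and the harmless-object list are invented.
HARMLESS_IF_HIT = {"newspaper", "plastic_bag", "small_leaf_debris"}

def should_brake(detected_class: str, confidence: float) -> bool:
    """Brake unless we're quite sure the object is safe to drive through.

    A human knows it's a newspaper almost before their eyes settle on it;
    the car has to act on whatever certainty its recognition can deliver
    in the time available.
    """
    if detected_class in HARMLESS_IF_HIT and confidence > 0.9:
        return False  # safe to hit - don't slam the brakes in fast traffic
    return True       # when in doubt, treat it as solid

print(should_brake("newspaper", 0.95))  # False
print(should_brake("newspaper", 0.55))  # True - not confident enough
```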
It’s easy to program an AI to respond to stimuli, but we need ones that can tell when there’s about to be something to respond to. A self-driving car should be able to say “this object near me is a car, and it’s behaving erratically, best to flag it and steer clear” in the same way that a human does.
Chariotoflove
> RamblinRover Luxury-Yacht
02/24/2018 at 10:04 | 1 |
Eric @ opposite-lock.com
> CB
02/24/2018 at 10:33 | 2 |
I hate to be that guy, but I don’t at the same time...
Aside from the Tesla crashes (which don’t involve true autonomous driving tech) and a couple of crashes that were totally the fault of the other driver, the existing mediocre autonomous cars are already better than you at driving. They have already driven more accident-free miles than any human driver does, even counting the crashes where fault was with the other driver.
Be humbled, but not too much. Humans have never been good at doing the same thing repeatedly with high accuracy, while machines have been outdoing us in this regard for millennia. Just watch a few “How It’s Made” videos on YouTube and consider what it would take for humans to do these jobs.
nermal
> CB
02/24/2018 at 10:46 | 1 |
I wonder how the cars will react to driving up to a pothole that somebody has painted a wiener shape around in protest.
I am also more amused than I should be that somewhere there is at least one engineer trying to figure out how an autonomous car should react to a pothole with a wiener shape painted around it.
wafflesnfalafel
> RamblinRover Luxury-Yacht
02/24/2018 at 10:56 | 1 |
Yes - individual human control aligns incentives to prevent issues: the average person does not want to crash their car and has a significant incentive not to let that happen. And on the flip side, I cannot trust an AI team to do any better a job at programming a self-driving vehicle than the average human does at driving. It’s only as good as its human-based direction. The whole concept is really just a super complicated, ramped-up automated tram - sure, they work most of the time, until something fails or a bit of code can’t deal with a particular situation and they crash.
And don’t take this as trying to convince folks not to use AI vehicles - they have many uses and could be very helpful to some people. But I’m going to trust their ability exactly as far as I trust any other human-piloted vehicle (which isn’t very far...).
Urambo Tauro
> CB
02/24/2018 at 11:42 | 0 |
I think it’s completely appropriate to demand perfection from autonomous vehicles.
HAVs can’t say “oops, didn’t see you there”. That’s a shitty excuse anyway for human drivers who can’t be bothered to LOOK. In the case of HAVs, it’s even less acceptable, since they’re continually monitoring 360°.
And HAVs can’t claim temporary lapses in judgement, either. If one happens to be “off its game”, and crashes into another car because it didn’t apply the brakes, then all HAVs will fall under suspicion of being prone to the exact same failure. You’re not talking about just one bad driver who screwed up. You’re talking about an entire fleet of cars that might very well do the same thing. This needs to be fixed before the cars are even introduced to the public.
My bird IS the word
> CB
02/24/2018 at 12:17 | 0 |
It’s a blame game. Not my fault! Car crashed itself. Sue Ford.
RamblinRover Luxury-Yacht
> If only EssExTee could be so grossly incandescent
02/24/2018 at 14:37 | 0 |
Even basic stuff like “that guy is leaning right in his lane, that probably means he’s thinking about turning off at this exit”. That is not stuff that’s easy to program for - at all - the kind of instinctive intra-group understanding of other humans doing what they do.
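Something like the crude heuristic below is about the best a programmed version of that hunch gets - and to be clear, the drift threshold, the sample window, and the sign convention are all numbers made up for illustration:

```python
# Made-up heuristic for "he's leaning right, he's probably taking this exit".
# The drift threshold and sign convention are arbitrary illustration choices.
def looks_like_exiting(lateral_offsets_m: list, drift_threshold_m: float = 0.4) -> bool:
    """Guess intent from recent lateral positions within the lane.

    Positive offsets mean drifting toward the exit side. A human reads
    this from posture, speed, and context in an instant; here it's just
    'has he moved right and kept moving right?'.
    """
    if len(lateral_offsets_m) < 2:
        return False
    drift = lateral_offsets_m[-1] - lateral_offsets_m[0]
    steadily_right = all(b >= a for a, b in zip(lateral_offsets_m, lateral_offsets_m[1:]))
    return drift > drift_threshold_m and steadily_right

print(looks_like_exiting([0.0, 0.1, 0.3, 0.5]))  # True - steady drift right
print(looks_like_exiting([0.0, 0.2, 0.1, 0.2]))  # False - just wobbling in lane
```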
If only EssExTee could be so grossly incandescent
> RamblinRover Luxury-Yacht
02/24/2018 at 14:52 | 0 |
Yup, and that all comes down to the near-instantaneous data processing that the human brain is capable of. Until we can perfectly simulate a brain, we’re stuck with these debates over self-driving ethics and blame.